H∞ Optimality Criteria for LMS and Backpropagation

Authors

  • Babak Hassibi
  • Ali H. Sayed
Abstract

We have recently shown that the widely known LMS algorithm is an H∞ optimal estimator. The H∞ criterion was introduced, initially in the control theory literature, as a means to ensure robust performance in the face of model uncertainties and lack of statistical information on the exogenous signals. We extend here our analysis to the nonlinear setting often encountered in neural networks, and show that the backpropagation algorithm is locally H∞ optimal. This fact provides a theoretical justification of the widely observed excellent robustness properties of the LMS and backpropagation algorithms. We further discuss some implications of these results.


Similar Papers


Statistical efficiency of adaptive algorithms

The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high quality solution is very close to optimal, while a low quality solution...


H∞ Optimality of the LMS Algorithm

We show that the celebrated LMS (Least-Mean Squares) adaptive algorithm is H∞ optimal. The LMS algorithm has been long regarded as an approximate solution to either a stochastic or a deterministic least-squares problem, and it essentially amounts to updating the weight vector estimates along the direction of the instantaneous gradient of a quadratic cost function. In this paper we show that LM...
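As a minimal illustration (a sketch, not the paper's derivation), the update rule this abstract describes — stepping the weight vector along the instantaneous gradient of the squared error — can be written in a few lines of Python; the step size `mu` and filter length `num_taps` below are arbitrary illustrative choices:

```python
import numpy as np

def lms(x, d, num_taps=4, mu=0.01):
    """Identify an FIR relation between input x and desired output d
    using the LMS rule: a gradient step on the instantaneous squared
    error e[n]**2 at every sample."""
    w = np.zeros(num_taps)
    for n in range(num_taps, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]  # regressor, most recent sample first
        e = d[n] - w @ u                     # instantaneous estimation error
        w = w + mu * e * u                   # update along the instantaneous gradient
    return w
```

On noiseless data generated by a fixed FIR filter, the weight estimate converges to the true taps, which is the classical least-squares view of LMS that the paper contrasts with its H∞ interpretation.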


Null-steering LMS Dual-Polarised Adaptive Antenna Arrays for GPS

The implementation of a null-steering antenna array using dual-polarised patch antennas is considered. Several optimality criteria for adjusting the array weights are discussed. The most effective criterion minimises the output power of the array subject to maintaining a right-hand circular polarisation (RHCP) response on the reference antenna. An unconstrained form of this criterion is derived,...


Optimal Training Algorithms and their Relation to Backpropagation

We derive global H∞ optimal training algorithms for neural networks. These algorithms guarantee the smallest possible prediction error energy over all possible disturbances of fixed energy, and are therefore robust with respect to model uncertainties and lack of statistical information on the exogenous signals. The ensuing estimators are infinite-dimensional, in the sense that updating the weigh...



Journal:

Volume   Issue

Pages  -

Publication date: 1994